A Cognitively Inspired POMDP Planning Technique
Authors
Abstract
A planning technique for probabilistic domains is presented, inspired by cognitive perceptual processes. The plan evolves as a by-product of situation understanding using spreading activation for cooperative support-building, and an annealing process to kill off weak results. We have implemented the technique and compared its results with one of the top planning techniques for this type of problem, showing much better scalability, with comparable solution
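As a rough illustration of the mechanism described in the abstract (not the authors' implementation), the Python sketch below spreads activation among mutually supporting plan candidates and applies a rising, annealing-style threshold that kills off weakly supported ones; all names, weights, and constants are hypothetical.

    # Sketch: nodes are candidate plan elements, edges carry mutual-support
    # weights. Activation spreads along edges; an annealing schedule raises
    # the survival threshold so weakly supported candidates die off.
    def spread_and_anneal(nodes, edges, steps=30, decay=0.9):
        activation = {n: 1.0 for n in nodes}      # initial activation
        threshold, growth = 0.05, 1.05            # assumed annealing schedule
        for _ in range(steps):
            new_act = {}
            for n in nodes:
                incoming = sum(w * activation.get(m, 0.0)
                               for (m, target), w in edges.items() if target == n)
                new_act[n] = decay * activation[n] + incoming
            total = sum(new_act.values()) or 1.0
            activation = {n: a / total for n, a in new_act.items()}
            # annealing: prune nodes whose support falls below a rising threshold
            nodes = [n for n in nodes if activation[n] >= threshold]
            threshold *= growth
        return sorted(nodes, key=lambda n: -activation[n])

    # Toy usage: two plan steps support each other, the third is unsupported.
    nodes = ["grasp-cup", "move-arm", "noise"]
    edges = {("grasp-cup", "move-arm"): 0.8,
             ("move-arm", "grasp-cup"): 0.8,
             ("noise", "noise"): 0.1}
    print(spread_and_anneal(nodes, edges))   # "noise" is pruned by the annealing

In this toy run the cooperative support between the two compatible steps keeps their activation high, while the isolated candidate falls below the rising threshold after a few iterations, mirroring the cooperative support-building and weak-result pruning named above.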
Similar resources
Risk-sensitive planning in partially observable environments
Partially Observable Markov Decision Process (POMDP) is a popular framework for planning under uncertainty in partially observable domains. Yet, the POMDP model is risk-neutral in that it assumes that the agent is maximizing the expected reward of its actions. In contrast, in domains like financial planning, it is often required that the agent's decisions are risk-sensitive (maximize the utility o...
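A toy sketch (not taken from the paper) of the risk-neutral versus risk-sensitive distinction: the two hypothetical actions below have the same expected reward, but an assumed exponential (risk-averse) utility function ranks the lower-variance one higher.

    import math

    # Two hypothetical actions, each a list of (reward, probability) pairs.
    safe  = [(10.0, 1.0)]                    # certain payoff
    risky = [(30.0, 0.5), (-10.0, 0.5)]      # same expected reward, higher variance

    def expected_reward(dist):
        return sum(r * p for r, p in dist)

    def expected_utility(dist, risk=0.1):
        # Assumed exponential risk-averse utility: U(r) = -exp(-risk * r)
        return sum(-math.exp(-risk * r) * p for r, p in dist)

    # A risk-neutral criterion treats the two actions as equivalent...
    print(expected_reward(safe), expected_reward(risky))      # 10.0 10.0
    # ...while the risk-sensitive criterion prefers the safe action.
    print(expected_utility(safe), expected_utility(risky))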
Point-Based Policy Transformation: Adapting Policy to Changing POMDP Models
Motion planning under uncertainty that can efficiently take into account changes in the environment is critical for robots to operate reliably in our living spaces. Partially Observable Markov Decision Process (POMDP) provides a systematic and general framework for motion planning under uncertainty. Point-based POMDP has advanced POMDP planning tremendously over the past few years, enabling POM...
PUMA: Planning Under Uncertainty with Macro-Actions
Planning in large, partially observable domains is challenging, especially when a long-horizon lookahead is necessary to obtain a good policy. Traditional POMDP planners that plan a different potential action for each future observation can be prohibitively expensive when planning many steps ahead. An efficient solution for planning far into the future in fully observable domains is to use temp...
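The truncated sentence refers to temporally extended actions. As a hedged illustration of the macro-action idea only (all names hypothetical, not PUMA's actual algorithm), the sketch below treats a macro-action as an open-loop sequence of primitive actions, so each level of lookahead covers several primitive steps and the planner branches on observations only between macros.

    from typing import List

    # Hypothetical primitive action set for a toy navigation domain.
    PRIMITIVES = ["forward", "left", "right"]

    def macro_actions(length: int) -> List[List[str]]:
        """Enumerate all primitive sequences of the given length (toy version)."""
        if length == 0:
            return [[]]
        return [[a] + rest for a in PRIMITIVES for rest in macro_actions(length - 1)]

    # With macros of length 3, a lookahead of depth 4 already reasons 12 primitive
    # steps ahead while only branching on observations between macros.
    macros = macro_actions(3)
    print(len(macros), macros[0])   # 27 ['forward', 'forward', 'forward']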
A POMDP Approach to Robot Motion Planning under Uncertainty
Motion planning in uncertain and dynamic environments is critical for reliable operation of autonomous robots. Partially observable Markov decision processes (POMDPs) provide a principled general framework for such planning tasks and have been successfully applied to several moderately complex robotic tasks, including navigation, manipulation, and target tracking. The challenge now is to scale ...
Publication date: 2010